Algorithm Complexity - Big O Notation in JavaScript

[Image: The graph of Big O Notation]

A Big O(1) is as good as it gets for algorithms. It's like, algorithm paradise. But many times, we can't achieve this.

The best Big O we can achieve when reversing an array is Big O(n): as we can see from the blue line, the execution time (or the number of operations the function has to do) increases linearly with input size.

Usually an algorithm with linear time (blue line) is good - in most cases it will be absolutely fine. While not as efficient as constant time (green line), it is often the best we can do, and it's a lot better than Big O(n^2) (red line), which we'll look at later.

Example 1 – Constant time complexity: Big O(1)

Here is a simple function called timesTwo:

function timesTwo(num) {
  return 2 * num
}

let result = timesTwo(5) // 10
let result2 = timesTwo(2000) // 4000

It takes a whole number as an argument, and then just returns 2 times that number. So, if we pass in 5, then it will return 2 times 5, which is 10. And if we pass in 2000, then it will return 2 times 2000, which is 4000.

Now, which of these do you think will take the longest to compute? 2 * 5 or 2 * 2000? It wouldn't be silly to guess that 2 * 2000 takes longer than 2 * 5, but you'd be wrong!

In JavaScript, both take the same time. It's just one operation (one multiplication). 20 * 2 billion takes as long as 2 * 3.

No matter what number we input into this function, it takes the same amount of time to compute. The algorithm is said to have a Big O(1) – pronounced “Big O of 1” - which is known as constant time complexity; no matter the size of the input, the function takes the same amount of time to compute.

But say we had a function with two operations, like the one below, where we input a number, multiply it by 4, save it to a variable called total, and return total times 3. So, all together, we have two multiplications in this function.

function manyTimes(num) {
  let total = 4 * num
  return total * 3
}

Now, we wouldn’t say this function has a Big O of 2; it’d still just be a Big O of 1 because we’re looking at the big picture (2 operations aren’t going to take a computer significantly longer than 1, so we can ignore the difference). No matter what we put in, the number of operations in the function won’t increase - it’s constant time.

What are space complexity and time complexity?

Before we go through another example, let’s get some definitions down:

“Time complexity”: analysing how the runtime of an algorithm changes as the input size increases.

The algorithms we went through above had constant Big O time complexity because the runtime of the algorithms did not increase as the input size increased.

“Space complexity” (aka auxiliary space complexity): The space required by the algorithm, not including inputs.

The algorithms above had constant space complexity. The function timesTwo didn’t store any values in memory. manyTimes only stored one value in memory: total. No matter what the input, both of these algorithms have a constant space complexity because if we increase the size of the input, the space in memory remains the same.
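To make the contrast concrete, here is a hypothetical pair of functions (the names sumAll and doubleAll are just for illustration, not from the examples above). The first uses constant space - one variable, whatever the input size - while the second builds a new array that grows with the input, giving it linear space complexity.

// Constant space - Big O(1): only one extra value is stored
function sumAll(arr) {
  let total = 0
  for (let num of arr) {
    total += num
  }
  return total
}

// Linear space - Big O(n): the new array grows with the input
function doubleAll(arr) {
  let doubled = []
  for (let num of arr) {
    doubled.push(num * 2)
  }
  return doubled
}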

Constant time and space complexity is as good as an algorithm can get, but it’s not always possible.

There is usually a trade-off between space complexity and time complexity: to increase the speed of an algorithm, you’ll likely need to store more variables in memory.
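As a minimal sketch of that trade-off (the memoize helper below is hypothetical, not something from earlier in the article), we can cache results in memory so that repeated calls with the same argument skip the computation: we spend space to save time.

function memoize(fn) {
  const cache = {} // extra memory spent here...
  return function (arg) {
    if (arg in cache) return cache[arg] // ...buys an instant lookup on repeat calls
    cache[arg] = fn(arg)
    return cache[arg]
  }
}

const fastTimesTwo = memoize(timesTwo)
fastTimesTwo(5) // 10 - computed, then cached
fastTimesTwo(5) // 10 - returned straight from the cache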

In this article, I’ll mostly focus on time complexity as that is usually what you’ll be most concerned with.

Now for another example…

Example 2 – Linear time complexity: Big O(n)

Below, we have a function called reverseArray, which loops over the input array starting at the last item, and builds up a new array which ends up being the input array reversed.

function reverseArray(arr) {
  let newArr = []
  for (let i = arr.length - 1; i >= 0; i--) {
    newArr.push(arr[i])
  }
  return newArr
}

const reversedArray1 = reverseArray([1, 2, 3]) // [3, 2, 1]
const reversedArray2 = reverseArray([1, 2, 3, 4, 5, 6]) // [6, 5, 4, 3, 2, 1]

If we input an array with the elements 1, 2, 3, it returns [3, 2, 1]. And if we input [1, 2, 3, 4, 5, 6], it returns [6, 5, 4, 3, 2, 1]. Now, which of these do you think will take longer to compute - an array 3 items long, or an array 6 items long?

This time, you are correct if you said the longer array of 6 elements. And that is because in this algorithm, we are looping over each element in the array, and then pushing that element onto a new array. That’s 2 extra operations for every extra element we have in the array.

So, if we pass in the array with 3 elements, there will be 6 operations in total. If we pass in an array of 6 elements, there will be 12 operations. If we double the array length, we double the number of operations.

This technically has a Big O(2n), where n is the length of the input. But remember, Big O looks at the big picture - how the algorithm behaves as the input size approaches infinity. If we passed in an array infinitely long, there would be 2 * infinity operations, and 2 * infinity is still just infinity. So we can drop the 2, because in the grand scheme of things that constant isn’t significant.

We can describe this function as having a Big O(n) – “A Big O of n” - which is known as linear time complexity.

You have now seen two different Big O notations, Big O(1) and Big O(n). Now would be a good time to show you the graph of Big O Notation and compare these Big Os.
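You can also get a feel for the difference empirically. Here is a rough, hypothetical benchmark using console.time - absolute timings will vary by machine and JavaScript engine, so treat it as an illustration rather than a precise measurement:

const small = Array.from({ length: 1000 }, (_, i) => i)
const large = Array.from({ length: 1000000 }, (_, i) => i)

console.time('reverse small')
reverseArray(small) // linear time: ~1,000 operations
console.timeEnd('reverse small')

console.time('reverse large')
reverseArray(large) // linear time: ~1,000,000 operations
console.timeEnd('reverse large')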

Example 3 – Quadratic time complexity: Big O(n^2)

Here’s a function called multiplyAll which accepts two arrays. It first makes sure they’re of equal length; if they are, it multiplies every number in the first array with every number in the second array and returns the sum of all these products.

function multiplyAll(arr1, arr2) {
  if (arr1.length !== arr2.length) return undefined
  let total = 0
  for (let i of arr1) {
    for (let j of arr2) {
      total += i * j
    }
  }
  return total
}

let result1 = multiplyAll([1, 2], [5, 6]) // 33
let result2 = multiplyAll([1, 2, 3, 4], [5, 3, 1, 8]) // 170

It does this using two for-loops. The first for-loop loops over the first array. Inside it, we have a second, nested for-loop, which loops over every item in the second array. So, for every item we loop over in the first array, we have to loop over every item in the second array.

If we pass in two arrays of length two ([1, 2] and [5, 6]), then for the first run through the outer loop, i will point at 1. It will then loop through the second array starting with 5, and 1 * 5 will be added to the total. It will then go to the second element in the second array and do 1 * 6 and add it to the total.

Then we get to the end of that loop, so we go back to the outer loop, and increment to the second element which is 2. We then do 2 * 5 and add it to the total, then, finally, 2 * 6 and add it to the total.

As we can see, for every item in the first array, we have to loop over every single item in the second array and perform a multiplication. So the total number of operations will be the length of the first array, which is n, times the length of the second array, which is also n, because we checked to make sure they are the same length. This results in a Big O(n^2) - quadratic time complexity.

If you are incredibly astute, you may have noticed that technically this algorithm would have a Big O(3 * n^2), because for every item in arr1, we:

  • loop over every item in arr2 (n^2 operations)
  • multiply two numbers (another n^2 operations)
  • add to the total (another n^2 operations).

Therefore, the total number of operations is n^2 + n^2 + n^2, which equals 3n^2.

But remember again, with Big O Notation, we are looking at the big picture, the worst-case scenario as the input length approaches infinity, and 3 times infinity is still infinity – a humungous number. So as the input size grows, this 3 becomes insignificant in the grand scheme of things and we simplify to say that this algorithm has a Big O(n^2) - “quadratic time-complexity”.
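You can verify the quadratic growth with a hypothetical operation counter (countOps is just for illustration): doubling the input size quadruples the number of operations.

function countOps(n) {
  let ops = 0
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      ops++ // one count per inner-loop iteration
    }
  }
  return ops
}

countOps(2) // 4
countOps(4) // 16 - double the input, four times the work
countOps(8) // 64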

Let’s reintroduce the graph of Big O Notation and discuss quadratic time complexity…

But What About Built-in Methods?

So far, we’ve only looked at custom functions, but it’s important to realise Big O also applies to built-in JavaScript functions, such as the array methods push, pop, unshift and shift.

In the code snippet below, we have a 4-item array, arr. If we push 5 onto the end of this array, then all we have to do is create a new place at the end of the array, give it an index, and put the value of 5 there. It doesn’t matter what the length of the array is, it will always be constant time - Big O(1). The number of operations is always the same – constant.

let arr = [1, 2, 3, 4]
// Adding to and removing from the end of the array => Big O(1) - constant time
arr.push(5) // [1, 2, 3, 4, 5]
arr.pop() // [1, 2, 3, 4]

But say we want to add 0 to the front of the array with unshift(0). We would have to re-index every item in the array, as the first index (index 0) would now point to our newly added value (0). We’d have to add 1 to every index in the array: the old first item is now the second, the old second item is now the third, and so on…

So unshifting and shifting have linear time complexities - Big O(n) - because the longer the input array, the more items have to be re-indexed.

// Adding and removing to the front of the array => Big O(n) - linear time
arr.unshift(0) // [0, 1, 2, 3, 4]
arr.shift() // [1, 2, 3, 4]
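This matters in practice. Building an array with unshift inside a loop re-indexes everything on every iteration, giving Big O(n^2) overall; a common alternative (sketched below with hypothetical function names) is to push in Big O(1) each time and reverse once at the end:

// Big O(n^2): every unshift re-indexes all the items added so far
function buildReversedSlow(n) {
  const out = []
  for (let i = 0; i < n; i++) {
    out.unshift(i)
  }
  return out
}

// Big O(n): each push is constant time, plus one O(n) reverse at the end
function buildReversedFast(n) {
  const out = []
  for (let i = 0; i < n; i++) {
    out.push(i)
  }
  return out.reverse()
}

buildReversedSlow(4) // [3, 2, 1, 0]
buildReversedFast(4) // [3, 2, 1, 0] - same result, far fewer operations for large n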

Example 4 – Logarithmic time complexity: Big O(log(n))

What are Logarithms?

Logarithms are a mathematical concept that many of you reading this article will have either forgotten from school, or have never studied at all. Let me briefly explain what a logarithm is…

First, here is the definition of “logarithm” from Oxford Languages:

“a quantity representing the power to which a fixed number (the base) must be raised to produce a given number.”

Let’s now walk through an example to make things clear:

Find the value of x:

log₂(16) = x

In this example, 2 is known as the “base” of the logarithm. I remember it’s called the base because it’s at the base (bottom) of the word “log”. From our definition, we know that we need to find what power to raise the base by in order to get 16.

So, we can rewrite our equation, log₂(16) = x, to the following:

2^x = 16

From this, we can see that x = 4, because 2^4 = 16:

2^4 = 2 * 2 * 2 * 2 = 16

Therefore:

log₂(16) = 4

Once again, a logarithm is the power to which a number (the base) must be raised in order to get some other number.

Please go through the above example until you get it. Writing it out is definitely a good idea! And please don’t use the excuse “I’m not good at maths”. Nobody is just “good” at maths; it requires mental effort!

N.B. In mathematics, if the base isn’t specified, e.g. log(20), it is usually assumed to be 10. However, in computer science, an unspecified base is assumed to be 2 - the “binary logarithm”. Remember not to fall for this trap when using a calculator!
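In JavaScript you can check these values directly. Math.log2 computes the binary logarithm, and for any other base you can use the change-of-base formula with Math.log (the natural logarithm):

Math.log2(16) // 4 - binary logarithm, the base assumed in this article
Math.log10(100) // 2 - base-10 logarithm
Math.log(16) / Math.log(2) // 4 - change of base: log2(16) via natural logs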

A logarithmic algorithm

Now you understand what a logarithm is, let’s introduce a very simple algorithm with a “Big O of log n”. Recall that n is the input size to the algorithm/function.

function logTime(arr) {
  let numberOfLoops = 0
  for (let i = 1; i < arr.length; i *= 2) {
    numberOfLoops++
  }
  return numberOfLoops
}

let loopsA = logTime([1]) // 0 loops
let loopsB = logTime([1, 2]) // 1 loop
let loopsC = logTime([1, 2, 3, 4]) // 2 loops
let loopsD = logTime([1, 2, 3, 4, 5, 6, 7, 8]) // 3 loops
let loopsE = logTime(Array(16)) // 4 loops

Notice that the update expression of the for-loop (i *= 2) multiplies the current value of i by 2, so i goes from 1 to 2 to 4 to 8 to 16 to 32…

In other words, it doubles with each loop.

As we can see from the examples (loopsA, loopsB, etc…), every time we double the length of the input array, the number of operations only increases by 1.

In simple terms, the number of operations doesn’t increase very much when we increase the size of the input.

Stating it mathematically:

For the loopsA example, the input length is 1 ([1]), so:

log₂(1) = 0 operations (or 0 loops)

For the loopsE example, the input length is 16:

log₂(16) = 4 operations

To conclude this example, if we increase the input length from 1 to 16, the number of operations (loops) only increases from 0 to 4. To increase the number of operations by 1, we have to double the size of the input.

Back to our graph of Big O Notation and looking at the yellow line, you can see that logarithmic time complexity is the second best performing Big O. It’s better than linear time (blue), but not quite as good as constant time (green).


Visualising logarithmic time with the balanced binary tree

A great way to visualise log(n) time is via a “balanced binary tree”:

[Image: a balanced binary tree]

The number of nodes doubles with each step down the binary tree.

The binary search algorithm has a Big O(log(n)). If we input a sorted array of length 16 (i.e. the bottom level in the tree above), it would take at most 4 steps (counting from the bottom level up to the top node) to find the number we were looking for.

Algorithms with logarithmic time are often “divide and conquer” style, meaning the data set is cut down/reduced upon each loop iteration. The algorithm has less data to deal with on each loop and so can find or sort things quickly.
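To make that concrete, here is a minimal sketch of binary search over a sorted array of numbers. Each iteration halves the remaining search range, which is exactly where the log(n) comes from.

function binarySearch(sortedArr, target) {
  let low = 0
  let high = sortedArr.length - 1
  while (low <= high) {
    const mid = Math.floor((low + high) / 2)
    if (sortedArr[mid] === target) return mid // found it
    if (sortedArr[mid] < target) {
      low = mid + 1 // target is bigger: discard the lower half
    } else {
      high = mid - 1 // target is smaller: discard the upper half
    }
  }
  return -1 // not found
}

binarySearch([1, 3, 5, 7, 9, 11, 13, 15], 11) // 5 - found after checking just 2 elements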
